Added flags to delete all dependent resources and configurations with… #437
base: master
Conversation
… deleting a cluster, addressing issue civo#406
// Define the flags for the kubernetesRemoveCmd
kubernetesRemoveCmd.Flags().BoolVar(&deleteVolumes, "delete-volumes", false, "Delete dependent volumes")
kubernetesRemoveCmd.Flags().BoolVar(&keepFirewalls, "keep-firewalls", false, "Keep dependent firewalls")
kubernetesRemoveCmd.Flags().BoolVar(&keepKubeconfig, "keep-kubeconfig", false, "Keep kubeconfig")
I am not sure we should be removing/managing the kubeconfig context on behalf of the user. Not to mention it isn't backwards compatible. I don't know why a user would want to keep a kubeconfig of a deleted cluster in the first place. cc: @fernando-villalba
So should I remove this flag?
@RealHarshThakur @fernando-villalba @uzaxirr
I don't know why a user would want to keep a kubeconfig of a deleted cluster in the first place
Exactly. Deleting the kubeconfig configuration automatically with a CLI tool when the cluster is deleted is extremely common: the GCP CLI does it, kind does it. It's extremely convenient and it just makes sense to do so.
if err != nil {
	utility.Error("error deleting the kubernetes cluster: %s", err)
	os.Exit(1)
}

/* Poll for the deletion status
We should fix this on the API side rather than the client side so clients don't have to poll like this. cc: @uzaxirr
Please correct me if I am wrong @RealHarshThakur @uzaxirr @fernando-villalba
I believe the current API logic is appropriate. The API call should end after triggering the deletion of the cluster, allowing the client-side polling mechanism to take over. This approach ensures that the API call is quick and responsive. Typically, the polling mechanism requires only 1-2 attempts with a good internet connection and up to 3 attempts with a weaker connection to confirm the deletion status.
If we move the polling logic to the API side, the API call might take longer to complete and could potentially result in timeouts or errors, especially under poor network conditions. This could negatively impact user experience and increase the complexity of the API.
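For reference, a minimal sketch of the kind of client-side polling loop described above; findCluster, errNotFound, and the retry counts are illustrative stand-ins rather than the PR's actual civogo calls:

// A rough sketch of client-side polling for deletion status.
// findCluster and errNotFound are hypothetical stand-ins for the real API lookup.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("cluster not found")

// findCluster is a placeholder; the real CLI would query the Civo API here.
func findCluster(id string) error {
	return errNotFound // pretend the cluster is already gone
}

// waitForDeletion polls until the cluster can no longer be found,
// giving up after the requested number of attempts.
func waitForDeletion(id string, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if err := findCluster(id); errors.Is(err, errNotFound) {
			return nil // deletion confirmed
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("cluster %s still present after %d attempts", id, attempts)
}

func main() {
	if err := waitForDeletion("example-cluster-id", 3, 5*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("cluster deleted")
}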
I look forward to your thoughts and feedback. Please let me know what I can improve.
Thanks for the PR @harsh082ip, we'll let you know how we decide to proceed after we discuss internally, as some of this can be done on the API side so we don't have to poll from the CLI and keep the user waiting.
Got it
// Delete kubeconfig if --keep-kubeconfig flag is not set
if !keepKubeconfig {
	kubeconfigPath := fmt.Sprintf("%s/.kube/config", os.Getenv("HOME"))
You should not delete the kubeconfig file, because it may contain other clusters' data, even from different cloud providers.
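One way to avoid touching other providers' entries is to prune only the deleted cluster's context, cluster, and user from the file. A minimal sketch, assuming client-go's clientcmd package and a hypothetical context name (this is not the PR's code):

// A rough sketch: remove only the deleted cluster's entries from the kubeconfig
// instead of deleting the whole file. The context name is an assumption for illustration.
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	contextName := "my-civo-cluster" // hypothetical context created for the cluster
	path := filepath.Join(os.Getenv("HOME"), ".kube", "config")

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Drop the context plus the cluster and user it points to,
	// leaving every other provider's entries untouched.
	if ctx, ok := cfg.Contexts[contextName]; ok {
		delete(cfg.Clusters, ctx.Cluster)
		delete(cfg.AuthInfos, ctx.AuthInfo)
		delete(cfg.Contexts, contextName)
		if cfg.CurrentContext == contextName {
			cfg.CurrentContext = ""
		}
	}

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}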
As referenced in issue #406, this update includes the following enhancements:
Flags for Deletion Operations:
- --delete-volumes: Deletes associated volumes when deleting a Kubernetes cluster.
- --keep-firewalls: Retains dependent firewalls during deletion.
- --keep-kubeconfig: Preserves the local kubeconfig instead of removing it.

Additional Output and Messaging (the second part of the acceptance criteria):
- Alongside the cluster deletion details, outputs information about any remaining volumes.
- Prints an informational message on stderr suggesting the --delete-volumes flag for future use (see the sketch after this list).

Error Handling:
- Addressed potential error scenarios to ensure robust operation.
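For illustration, a minimal sketch of the remaining-volumes output and the stderr hint; the volume names are placeholders rather than real API data:

// A rough sketch of the post-deletion messaging: list volumes that were left
// behind and print the --delete-volumes hint to stderr. The volume names are
// placeholders; the real CLI would get them from the Civo API.
package main

import (
	"fmt"
	"os"
)

func main() {
	remainingVolumes := []string{"pvc-1a2b3c", "pvc-4d5e6f"} // placeholder data

	if len(remainingVolumes) > 0 {
		fmt.Println("The following volumes were not deleted:")
		for _, v := range remainingVolumes {
			fmt.Printf("  - %s\n", v)
		}
		// The hint goes to stderr so normal output stays machine-readable.
		fmt.Fprintln(os.Stderr, "Tip: pass --delete-volumes to remove these volumes next time.")
	}
}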
Please review and let me know if any changes are needed.